How many times has our community solved the sampling problem? I think it’s a fair question. You know I’m talking about claims rather than actual solutions. And many, if not most, of those claims are made in the abstracts of papers, even when the data paint a more limited story. I think our abstracts are the problem.

Few people read papers in detail, and we often make judgments based largely on the abstract and conclusions supplied by the authors. When I read the abstracts of papers from our community, I wonder whether we exaggerate too often.

I have become that annoying referee who bullies authors into toning down their abstracts to match their actual results. (Yes, that was me.) I want to encourage you to develop that same annoying tendency when you referee papers.

We will all benefit from more accurate reporting. Our colleagues won’t have unreasonable expectations of us, and we won’t have to read every paper that makes a big claim. What a delight it would be to read, “Although our method worked well by design in our system of interest, it’s unlikely to work well in your system.”

Then it hit me (and I’m sure a few of you had the same thought) that I may have solved the sampling problem myself more than once! I went back to some abstracts from papers of mine that I genuinely thought were major advances at the time. What had I said? It was a relief that most were pretty honest and precise. But a few contained statements that may not have been exaggerations but were, in retrospect, ill-considered. OK, they were exaggerations.

One abstract said the following: “The absolute free energy—or partition function, equivalently—of a molecule can be estimated computationally using a suitable reference system. Here, we demonstrate a practical method for staging such calculations by growing a molecule based on a series of fragments.” Hmm, a practical method … for a system of any size? Not really. I should have qualified that.

Another said, “For a two-dimensional ‘toy’ test system, the new minimally optimized method performs roughly one hundred times faster than either optimized ‘traditional’ Jarzynski calculations or conventional thermodynamic integration. The simplicity of the underlying formalism suggests the approach will find broad applicability in molecular systems.” I really thought so. But it wasn’t quite true.

A third said, “the [weighted ensemble] WE method is readily applied to more chemically accurate models, and by studying a series of lower temperatures, our results suggest that the WE method can increase efficiency by orders of magnitude in more challenging systems.” This has sort of proved true, but it’s been a hard road, and I’ve already confessed that WE cannot solve every problem.

Abstracts are a key tool in our business, so be honest. Your later self will thank you. And be a tough referee. The authors’ later selves will thank you.

Postscript. While we’re on the subject of abstracts, I have another suggestion: try writing your abstract before doing your research. (Of course, check it after you’re done to be sure it’s accurate!) The idea is to establish the big picture of your project so you can see where you’re going. You want to avoid unimportant rabbit holes during the long months of research, and to make sure you accomplish something important and new. I often encourage students and postdocs to draft an abstract at the start of a project. Try it!

UPDATE [Feb. 13, 2020]: Ten days after I made this post (Dec 7, 2019), the New York Times published a related piece suggesting that one gender was more likely to oversell its work with self-congratulatory language. (Guess which one?) This made me think more about the gender issue in our own field. Have something to say on this issue? Write me: zuckermd@ohsu.edu. I hope to do a post on this in the future, ideally relating the experiences of several individuals, anonymously if they wish.